If we want to study climate change, we can find data on the Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies in the Northern Hemisphere at NASA's Goddard Institute for Space Studies. The tabular data of temperature anomalies can be found here.
To define a temperature anomaly you need a reference, or base, period, which NASA states is 1951-1980: each anomaly is the deviation of the observed temperature from the average for that month over the base period.
Run the code below to load the file:
weather <-
read_csv("https://data.giss.nasa.gov/gistemp/tabledata_v4/NH.Ts+dSST.csv",
skip = 1,
na = "***")
Notice that, when using this function, we added two options: skip and na.
The skip = 1 option is there because the real data table only starts in row 2, so we need to skip one row. The na = "***" option tells R how missing observations in the spreadsheet are coded. When looking at the spreadsheet, you can see that missing data are coded as "***". It is best to specify this here, as otherwise some of the data would not be recognised as numeric.
Once the data are loaded, notice that there is an object titled weather in the Environment panel. If you cannot see the panel (usually on the top right), go to Tools > Global Options > Pane Layout and tick the checkbox next to Environment. Click on the weather object, and the dataframe will pop up in a separate tab. Inspect the dataframe.
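Alternatively (just a suggestion, not part of the assignment), you can inspect the data directly from the console, assuming the tidyverse is loaded:
# a quick programmatic look at the data: column names, types, and a few values
glimpse(weather)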
For each month and year, the dataframe shows the deviation of temperature from the expected value. Note also that the dataframe is in wide format.
You have two objectives in this section:
Select the year and the twelve month variables from the weather dataset. We do not need the others (J-D, D-N, DJF, etc.) for this assignment. Hint: use select() function.
Convert the dataframe from wide to ‘long’ format. Hint: use gather() or pivot_longer() function. Name the new dataframe as tidyweather, name the variable containing the name of the month as month, and the temperature deviation values as delta.
tidyweather <- weather %>%
select(Year, Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec) %>%
pivot_longer(cols = 2:13, # columns 2 to 13 (the twelve months)
names_to = "Month",
values_to = "delta")
Inspect your dataframe. It should have three variables now, one each for the year, the month, and the temperature deviation (delta).
Let us plot the data using a time-series scatter plot, and add a trendline. To do that, we first need to create a new variable called date in order to ensure that the delta values are plotted chronologically.
In the following chunk of code, I used the eval=FALSE argument, which prevents a chunk from being run; I did so that you can knit the document before tidying the data and creating the new dataframe tidyweather. When you actually want to run this code and knit your document, you must delete eval=FALSE, not just here but in all chunks where eval=FALSE appears.
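For reference, chunk options sit in the chunk header of the .Rmd file, so a chunk that is currently not being evaluated looks something like this (the chunk name is just an illustration):
```{r tidy_weather, eval=FALSE}
# code in this chunk is displayed but not run when you knit; delete eval=FALSE to run it
```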
tidyweather <- tidyweather %>%
mutate(date = ymd(paste(as.character(Year), Month, "1")),
month = month(date, label=TRUE),
year = year(date))
ggplot(tidyweather, aes(x=date, y = delta))+
geom_point()+
geom_smooth(color="red") +
theme_bw() +
labs (
title = "Weather Anomalies"
)
Is the effect of increasing temperature more pronounced in some months? Use facet_wrap() to produce a separate scatter plot for each month, again with a smoothing line. Your chart should have human-readable labels; that is, each month should be labeled "Jan", "Feb", "Mar" (full or abbreviated month names are fine), not 1, 2, 3.
#Your code goes here...
ggplot(tidyweather, aes(x=date, y = delta))+
geom_point()+
geom_smooth(color="red") +
theme_bw() +
labs (
title = "Weather Anomalies"
) +
facet_wrap(~factor(month, levels = unique(month)))
The scatter plots above show that the effect of increasing temperature is more pronounced in Q1 and Q4, particularly November, December, February and March. This can potentially be linked to extreme weather conditions associated with climate change, leading to heatwaves and relatively higher temperatures around the world.
It is sometimes useful to group data into different time periods to study historical data. For example, we often refer to decades such as the 1970s, 1980s, and 1990s to refer to a period of time. NASA calculates a temperature anomaly as the difference from the base period of 1951-1980. The code below creates a new data frame called comparison that groups data in five time periods: 1881-1920, 1921-1950, 1951-1980, 1981-2010 and 2011-present.
We remove data from before 1881 using filter(). Then, we use the mutate() function to create a new variable interval which contains information on which period each observation belongs to. We can assign the different periods using case_when().
comparison <- tidyweather %>%
filter(Year>= 1881) %>% #remove years prior to 1881
#create new variable 'interval', and assign values based on criteria below:
mutate(interval = case_when(
Year %in% c(1881:1920) ~ "1881-1920",
Year %in% c(1921:1950) ~ "1921-1950",
Year %in% c(1951:1980) ~ "1951-1980",
Year %in% c(1981:2010) ~ "1981-2010",
TRUE ~ "2011-present"
))
Inspect the comparison dataframe by clicking on it in the Environment pane.
Now that we have the interval variable, we can create a density plot to study the distribution of monthly deviations (delta), grouped by the different time periods we are interested in. Set fill to interval to group and colour the data by different time periods.
ggplot(comparison, aes(x=delta, fill=interval))+
geom_density(alpha=0.2) + #density plot with transparency set to 20%
theme_bw() + #theme
labs (
title = "Density Plot for Monthly Temperature Anomalies",
y = "Density" #changing y-axis label to sentence case
)
So far, we have been working with monthly anomalies. However, we might be interested in average annual anomalies. We can do this by using group_by() and summarise(), followed by a scatter plot to display the result.
#creating yearly averages
average_annual_anomaly <- tidyweather %>%
group_by(Year) %>% #grouping data by Year
# creating summaries for mean delta
# use `na.rm=TRUE` to eliminate NA (not available) values
summarise(annual_average_delta = mean(delta, na.rm=TRUE))
#plotting the data:
ggplot(average_annual_anomaly, aes(x=Year, y= annual_average_delta))+
geom_point()+
#Fit the best fit line, using LOESS method
geom_smooth(method = 'loess') +
#change to theme_bw() to have white background + black frame around plot
theme_bw() +
labs (
title = "Average Yearly Anomaly",
y = "Average Annual Delta"
)
NASA points out on their website that
A one-degree global change is significant because it takes a vast amount of heat to warm all the oceans, atmosphere, and land by that much. In the past, a one- to two-degree drop was all it took to plunge the Earth into the Little Ice Age.
Your task is to construct a confidence interval for the average annual delta since 2011, both using a formula and using a bootstrap simulation with the infer package. Recall that the dataframe comparison has already grouped temperature anomalies according to time intervals; we are only interested in what is happening between 2011-present.
formula_ci <- comparison %>%
# choose the interval 2011-present
# what dplyr verb will you use?
filter(interval == "2011-present") %>%
# calculate summary statistics for temperature deviation (delta)
# calculate mean, SD, count, SE, lower/upper 95% CI
# what dplyr verb will you use?
summarise(mean_variation = mean(delta, na.rm=TRUE),
sd_variation = sd(delta, na.rm=TRUE),
count_variation = n(),
se_variation = sd_variation / sqrt(count_variation),
t_critical = qt(0.975, count_variation - 1),
lower_95p_variation = mean_variation - t_critical * se_variation,
upper_95p_variation = mean_variation + t_critical * se_variation
)
#print out formula_CI
formula_ci
## # A tibble: 1 × 7
## mean_variation sd_variation count_variation se_variation t_critical
## <dbl> <dbl> <int> <dbl> <dbl>
## 1 1.06 0.274 132 0.0239 1.98
## # … with 2 more variables: lower_95p_variation <dbl>, upper_95p_variation <dbl>
# use the infer package to construct a 95% CI for delta
boot_delta <- comparison %>%
filter(interval == "2011-present") %>%
specify(response = delta) %>%
generate(reps=1000, type="bootstrap") %>%
calculate(stat="mean")
percentile_ci <- boot_delta %>%
get_confidence_interval(level = 0.95, type="percentile")
percentile_ci
## # A tibble: 1 × 2
## lower_ci upper_ci
## <dbl> <dbl>
## 1 1.01 1.11
## # A tibble: 1 × 2
## lower_95p_variation upper_95p_variation
## <dbl> <dbl>
## 1 1.01 1.11
What is the data showing us? Please type your answer after (and outside!) this blockquote. You have to explain what you have done, and the interpretation of the result. One paragraph max, please!
The results are the same to at least two decimal places. Using the formula (first part), we obtained a lower 95% confidence bound of 1.01 and an upper bound of 1.11, which means we are 95% confident that this interval captures the true average temperature deviation since 2011. Using the bootstrapping method, we resample from our sample 1,000 times to build a sampling distribution of the mean, from which we obtain the same lower and upper bounds.
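As an optional check (not required by the brief), the infer package can also plot the bootstrap distribution with the percentile interval shaded, using the objects created above:
# plot the bootstrap sampling distribution and shade the 95% percentile CI
boot_delta %>%
  visualize() +
  shade_confidence_interval(endpoints = percentile_ci)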
A 2010 Pew Research poll asked 1,306 Americans, “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?”
In this exercise we analyze whether there are any differences between the proportion of people who believe the earth is getting warmer and their political ideology. As usual, from the survey sample data, we will use the proportions to estimate values of population parameters. The file has 2253 observations on the following 2 variables:
party_or_ideology: a factor (categorical) variable with levels Conservative Republican, Liberal Democrat, Mod/Cons Democrat, and Mod/Lib Republican
response: whether the respondent believes the earth is warming or not, or Don't know / refuse to answer
You will also notice that many responses should not be taken into consideration, like "No Answer", "Don't Know", "Not applicable", and "Refused to Answer".
global_warming_simple <- global_warming_pew %>%
filter(response != "Don't know / refuse to answer") %>%
group_by(party_or_ideology) %>%
#Mutate used to merge Mod/Cons Democrat and Mod/Lib Republican and therefore create 3 CI's
mutate(party_or_ideology = case_when(
grepl('Mod', party_or_ideology, ignore.case = TRUE) ~ 'Moderate_Democrat_and_Republican',
TRUE ~ party_or_ideology)) %>%
summarise(warming = sum(response == "Earth is warming"),
total = n())
global_warming_simple %>%
rowwise() %>%
mutate(lower_ci = prop.test(warming, total, conf.level=.95)$conf.int[1],
upper_ci = prop.test(warming, total, conf.level=.95)$conf.int[2])
## # A tibble: 3 × 5
## # Rowwise:
## party_or_ideology warming total lower_ci upper_ci
## <chr> <int> <int> <dbl> <dbl>
## 1 Conservative Republican 248 698 0.320 0.392
## 2 Liberal Democrat 405 428 0.919 0.965
## 3 Moderate_Democrat_and_Republican 698 991 0.675 0.732
We will be constructing three 95% confidence intervals to estimate population parameters for the percentage who believe that the Earth is warming, according to their party or ideology. You can create the CIs using the formulas by hand, or use prop.test(); just remember to exclude the "Don't know / refuse to answer" responses!
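For reference, the "by hand" version uses the normal approximation for a proportion. A minimal sketch, using the Conservative Republican counts from the table above (248 believers out of 698 respondents):
# 95% CI for a proportion via the normal approximation
x <- 248                 # respondents who said "Earth is warming"
n <- 698                 # total respondents in the group
p_hat <- x / n
se    <- sqrt(p_hat * (1 - p_hat) / n)
z     <- qnorm(0.975)
c(lower = p_hat - z * se, upper = p_hat + z * se)
The result is very close to the prop.test() interval above; prop.test() uses the Wilson method with a continuity correction, hence the small difference.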
Does it appear that whether or not a respondent believes the earth is warming is independent of their party ideology?
The table above shows that climate beliefs, in particular whether or not a respondent believes the earth is warming, are indeed influenced by the respondent's party or ideology. The 95% confidence intervals for the percentage who believe the Earth is warming differ substantially across groups: Liberal Democrats are the most likely to believe that the Earth is warming, and Conservative Republicans the least likely.
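A more formal way to examine this, not required by the assignment, is a chi-squared test of independence (equivalently, homogeneity of proportions) on the believer/non-believer counts summarised above; a minimal sketch:
# chi-squared test: belief in warming vs party/ideology
counts <- global_warming_simple %>%
  mutate(not_warming = total - warming) %>%  # respondents who said the earth is not warming
  select(warming, not_warming) %>%
  as.matrix()
chisq.test(counts)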
You may also want to read The challenging politics of climate change.
As we saw in class, fivethirtyeight.com has detailed data on all polls that track the president's approval.
# Import approval polls data directly off fivethirtyeight website
approval_polllist <- read_csv('https://projects.fivethirtyeight.com/biden-approval-data/approval_polllist.csv')
glimpse(approval_polllist)
## Rows: 1,819
## Columns: 22
## $ president <chr> "Joseph R. Biden Jr.", "Joseph R. Biden Jr.", "Jos…
## $ subgroup <chr> "All polls", "All polls", "All polls", "All polls"…
## $ modeldate <chr> "10/1/2021", "10/1/2021", "10/1/2021", "10/1/2021"…
## $ startdate <chr> "1/19/2021", "1/19/2021", "1/20/2021", "1/20/2021"…
## $ enddate <chr> "1/21/2021", "1/21/2021", "1/21/2021", "1/21/2021"…
## $ pollster <chr> "Rasmussen Reports/Pulse Opinion Research", "Morni…
## $ grade <chr> "B", "B", "B+", "B", "B-", "B", "B+", "B-", "B", "…
## $ samplesize <dbl> 1500, 15000, 1516, 1993, 1115, 15000, 941, 1200, 1…
## $ population <chr> "lv", "a", "a", "rv", "a", "a", "rv", "rv", "lv", …
## $ weight <dbl> 0.3382, 0.2594, 1.2454, 0.0930, 1.1014, 0.2333, 1.…
## $ influence <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ approve <dbl> 48, 50, 45, 56, 55, 51, 63, 58, 48, 52, 53, 55, 48…
## $ disapprove <dbl> 45, 28, 28, 31, 32, 28, 37, 32, 47, 29, 29, 33, 47…
## $ adjusted_approve <dbl> 50.5, 48.6, 46.5, 54.6, 53.9, 49.6, 58.7, 56.9, 50…
## $ adjusted_disapprove <dbl> 38.8, 31.3, 28.4, 34.3, 32.9, 31.3, 38.0, 33.1, 40…
## $ multiversions <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA…
## $ tracking <lgl> TRUE, TRUE, NA, NA, NA, TRUE, NA, NA, TRUE, TRUE, …
## $ url <chr> "https://www.rasmussenreports.com/public_content/p…
## $ poll_id <dbl> 74247, 74272, 74327, 74246, 74248, 74273, 74256, 7…
## $ question_id <dbl> 139395, 139491, 139570, 139394, 139404, 139492, 13…
## $ createddate <chr> "1/22/2021", "1/28/2021", "2/2/2021", "1/22/2021",…
## $ timestamp <chr> "10:23:09 1 Oct 2021", "10:23:09 1 Oct 2021", "1…
# Use `lubridate` to fix dates, as they are given as characters.
approval_polllist_date <- approval_polllist %>%
mutate(modeldate = mdy(modeldate),
startdate = mdy(startdate),
enddate = mdy(enddate),
createddate = mdy(createddate)
)
approval_polllist_date
## # A tibble: 1,819 × 22
## president subgroup modeldate startdate enddate pollster grade samplesize
## <chr> <chr> <date> <date> <date> <chr> <chr> <dbl>
## 1 Joseph R… All pol… 2021-10-01 2021-01-19 2021-01-21 Rasmuss… B 1500
## 2 Joseph R… All pol… 2021-10-01 2021-01-19 2021-01-21 Morning… B 15000
## 3 Joseph R… All pol… 2021-10-01 2021-01-20 2021-01-21 YouGov B+ 1516
## 4 Joseph R… All pol… 2021-10-01 2021-01-20 2021-01-21 Morning… B 1993
## 5 Joseph R… All pol… 2021-10-01 2021-01-20 2021-01-21 Ipsos B- 1115
## 6 Joseph R… All pol… 2021-10-01 2021-01-20 2021-01-22 Morning… B 15000
## 7 Joseph R… All pol… 2021-10-01 2021-01-21 2021-01-22 HarrisX B+ 941
## 8 Joseph R… All pol… 2021-10-01 2021-01-21 2021-01-23 RMG Res… B- 1200
## 9 Joseph R… All pol… 2021-10-01 2021-01-20 2021-01-24 Rasmuss… B 1500
## 10 Joseph R… All pol… 2021-10-01 2021-01-21 2021-01-23 Morning… B 15000
## # … with 1,809 more rows, and 14 more variables: population <chr>,
## # weight <dbl>, influence <dbl>, approve <dbl>, disapprove <dbl>,
## # adjusted_approve <dbl>, adjusted_disapprove <dbl>, multiversions <chr>,
## # tracking <lgl>, url <chr>, poll_id <dbl>, question_id <dbl>,
## # createddate <date>, timestamp <chr>
What I would like you to do is to calculate the average net approval rate (approve - disapprove) for each week since he got into office. I want you to plot the net approval along with its 95% confidence interval. There are various dates given for each poll; please use enddate, i.e., the date the poll ended.
Also, please add an orange line at zero. Your plot should look like this:
approval_summary <- approval_polllist_date %>%
arrange(enddate) %>%
filter(enddate >= "2021-03-01") %>%
group_by(week = isoweek(enddate)) %>%
mutate(net_margin = approve - disapprove) %>%
summarise(mean_appr_disappr = mean(net_margin, na.rm=TRUE),
sd_appr = sd(net_margin, na.rm = TRUE),
se_appr = sd_appr / sqrt(n()),
t_critical = qt(0.975, n() - 1),
lower_95p_appr = mean_appr_disappr - t_critical * se_appr,
upper_95p_appr = mean_appr_disappr + t_critical * se_appr)
approval_summary
## # A tibble: 31 × 7
## week mean_appr_disappr sd_appr se_appr t_critical lower_95p_appr
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 9 12.9 7.24 1.07 2.01 10.7
## 2 10 14.3 7.38 1.05 2.01 12.2
## 3 11 16.1 8.67 1.28 2.01 13.5
## 4 12 14.0 8.52 1.14 2.00 11.7
## 5 13 13.0 8.57 1.24 2.01 10.5
## 6 14 12.6 8.39 1.34 2.02 9.90
## 7 15 12.1 6.86 0.934 2.01 10.2
## 8 16 13.3 6.71 0.819 2.00 11.7
## 9 17 14.2 8.47 1.14 2.00 11.9
## 10 18 14.0 7.57 1.03 2.01 11.9
## # … with 21 more rows, and 1 more variable: upper_95p_appr <dbl>
ggplot(approval_summary, aes(x=week, y=mean_appr_disappr, color="red")) +
geom_point() +
geom_line() +
geom_ribbon(aes(ymin=lower_95p_appr, ymax=upper_95p_appr), alpha=0.2) +
geom_smooth(method="loess", se = FALSE, color="blue") +
theme_bw() +
theme(legend.position = "none") +
geom_hline(yintercept = 0, color="orange") +
labs(x = "Week of the year",
y = "Average Approval Margin (Approve - Disapprove)",
title = "Estimating Approval Margin (approve - disapprove) for Joe Biden",
subtitle = "Weekly average of all polls")
We excluded everything before week 9, as the earliest weeks contain relatively few observations, which leads to relatively wide confidence intervals.
Compare the confidence intervals for week 3 and week 25. Can you explain what’s going on? One paragraph would be enough.
The confidence intervals for week 3 and week 8 (not included in the graph) are much wider than that of week 25, because those early weeks contain far fewer polls; with a smaller sample size the standard error of the weekly mean is larger, so the interval is wider. (Please also note, referring to the question asked on Slack, that because our dataset differs from the one behind the original graph, the interpretation might vary as well.)
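To see the scaling, recall that the half-width of the interval is t* × s / √n. A rough illustration, where the standard deviation and poll counts below are purely hypothetical:
# illustrative only: the CI half-width shrinks as the number of polls grows
s <- 8                        # hypothetical weekly sd of the net margin
n_early <- 10; n_late <- 60   # hypothetical poll counts in an early vs a later week
qt(0.975, n_early - 1) * s / sqrt(n_early)   # wide half-width
qt(0.975, n_late  - 1) * s / sqrt(n_late)    # much narrower half-width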
Recall the TfL data on how many bikes were hired every single day. We can get the latest data by running the following
url <- "https://data.london.gov.uk/download/number-bicycle-hires/ac29363e-e0cb-47cc-a97a-e216d900a6b0/tfl-daily-cycle-hires.xlsx"
# Download TFL data to temporary file
httr::GET(url, write_disk(bike.temp <- tempfile(fileext = ".xlsx")))
## Response [https://airdrive-secure.s3-eu-west-1.amazonaws.com/london/dataset/number-bicycle-hires/2021-09-23T12%3A52%3A20/tfl-daily-cycle-hires.xlsx?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJJDIMAIVZJDICKHA%2F20211003%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20211003T214741Z&X-Amz-Expires=300&X-Amz-Signature=8b7e26d73d2c2d9738f9cab44e2a4a6914ad8adf85b0dfb36762db9b22911cb3&X-Amz-SignedHeaders=host]
## Date: 2021-10-03 21:52
## Status: 200
## Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
## Size: 174 kB
## <ON DISK> /var/folders/lp/pk7fq3n95sb6vdn3c_zq522w0000gn/T//RtmpBPuaeT/file12bf663aa0a37.xlsx
# Use read_excel to read it as dataframe
bike0 <- read_excel(bike.temp,
sheet = "Data",
range = cell_cols("A:B"))
# change dates to get year, month, and week
bike <- bike0 %>%
clean_names() %>%
rename (bikes_hired = number_of_bicycle_hires) %>%
mutate (year = year(day),
month = lubridate::month(day, label = TRUE),
week = isoweek(day)) %>%
filter(year >= 2015)
We can easily create a facet grid that plots bikes hired by month and year.
ggplot(bike, aes(bikes_hired)) +
geom_density() +
facet_grid(rows = vars(year), cols = vars(month)) +
labs(x = "Bike Rentals",
y = "",
title = "Distribution of bikes hired per month") +
scale_x_continuous(breaks = c(20000, 40000, 60000),
labels = c('20K', '40K', '60K'))+
scale_y_continuous(breaks = c()) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank()) +
theme_bw()
Look at May and Jun and compare 2020 with the previous years. What's happening?
In May and June 2019, many more days had rental counts concentrated around the mean, whereas the corresponding 2020 distributions look flatter. Overall, it seems that May and June 2020 had fewer bike rentals in total, potentially attributable to bad weather. Similarly, it is possible that Covid caused people to go outside less, resulting in many days with a low number of bike rentals.
However, the challenge I want you to work on is to reproduce the following two graphs.
The second one looks at percentage changes from the expected level of weekly rentals. The two grey shaded rectangles correspond to Q2 (weeks 14-26) and Q4 (weeks 40-52).
# average_weekly_bikes <- median(bike$bikes_hired, na.rm = TRUE) * 7
average_weekly_bikes <- bike %>%
filter(year %in% c("2016", "2017", "2018", "2019")) %>%
summarise(avg = median(bikes_hired, na.rm = TRUE) * 7)
bike_weekly <- bike %>%
group_by(year, week) %>%
summarise(diff = (sum(bikes_hired) - average_weekly_bikes$avg)/average_weekly_bikes$avg) %>%
filter(year %in% c("2016", "2017", "2018", "2019", "2020", "2021"), week <= 52)
tfl_colors <- c("grey" = "grey",
"below" = "red",
"above" = "green",
"positive" = "green",
"negative" = "red")
ggplot(bike_weekly, aes(x=week, y=diff)) +
geom_rect(aes(xmin = 14, xmax = 26, ymin = -Inf, ymax = Inf), fill = "grey", alpha = 0.01) +
geom_rect(aes(xmin = 40, xmax = 52, ymin = -Inf, ymax = Inf), fill = "grey", alpha = 0.01) +
geom_line() +
geom_ribbon(aes(ymin = 0, ymax = pmin(diff, 0), fill = "below"), alpha = 0.3) +
geom_ribbon(aes(ymin = 0, ymax = pmax(0, diff), fill = "above"), alpha = 0.3) +
facet_wrap(~year) +
#scale_fill_manual(values = c('green', 'red')) +
theme_bw() +
labs(x="Week",
y="Deviation",
title="Weekly changes in TfL bike rentals",
subtitle = "% change from weekly averages calculated between 2016-2019"
) +
theme(legend.position = "none") +
geom_rug(data= subset(bike_weekly, diff >= 0), aes(color="positive"), sides="b") +
geom_rug(data= subset(bike_weekly, diff <= 0), aes(color="negative"), sides="b") +
scale_fill_manual(values = tfl_colors) +
scale_color_manual(values = tfl_colors) +
scale_y_continuous(labels = scales::percent, limits = c(-0.5, 1))
For both of these graphs, you have to calculate the expected number of rentals per week or month between 2016-2019 and then see how each week/month of 2020-2021 compares to the expected rentals. Think of the calculation excess_rentals = actual_rentals - expected_rentals.
Should you use the mean or the median to calculate your expected rentals? Why?
The median makes more sense, as extreme values do not affect it the way they would affect the mean. For example, March 2018 contains an extreme observation that would distort a mean-based expectation.
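As an illustration of the monthly calculation, here is a minimal sketch that uses the 2016-2019 median of daily hires as the expected level; the column names follow the bike dataframe created above, and the exact aggregation you choose for the actual level may differ:
# expected rentals per calendar month = median daily hires over 2016-2019
expected_monthly <- bike %>%
  filter(year %in% 2016:2019) %>%
  group_by(month) %>%
  summarise(expected_rentals = median(bikes_hired, na.rm = TRUE), .groups = "drop")
# excess rentals = actual monthly level minus the expected level
monthly_excess <- bike %>%
  group_by(year, month) %>%
  summarise(actual_rentals = mean(bikes_hired, na.rm = TRUE), .groups = "drop") %>%
  left_join(expected_monthly, by = "month") %>%
  mutate(excess_rentals = actual_rentals - expected_rentals)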
In creating your plots, you may find these links useful:
Remember how we used the tidyquant package to download CPI data. In this exercise, I would like you to do the following:
tidyquant::tq_get(get = "economic.data", from = "2000-01-01") to get all data since January 1, 2000
the lag function, and specifically, year_change = value/lag(value, 12) - 1; this means you are comparing the current month's value with that of 12 months ago, lag(value, 12)
geom_smooth() for each component to get a sense of the overall trend. You may want to colour the points according to whether the yearly change was positive or negative.
Having done this, you should get this graph.
url <- "https://fredaccount.stlouisfed.org/public/datalist/843"
tables <- url %>%
read_html() %>%
html_nodes(css="table")
cpi <- map(tables, . %>%
html_table(fill=TRUE)%>%
janitor::clean_names())
cpi_components <- cpi[[2]]
econ_data <- cpi_components %>%
select(series_id) %>%
pull() %>%
tq_get(get = "economic.data", from = "2000-01-01")
econ_data_detailed <- econ_data %>%
group_by(symbol) %>% # compute the 12-month change within each series
mutate(year_change = price / lag(price, 12) - 1) %>%
ungroup()
avg_year_chg <- econ_data_detailed %>%
group_by(symbol) %>%
summarize(avg_year_change = mean(year_change, na.rm = TRUE))
econ_data_detailed <- left_join(econ_data_detailed, avg_year_chg, by = "symbol") %>%
arrange(desc(avg_year_change))
#econ_data_detailed[1] <- econ_data_detailed[econ_data_detailed$symbol == "CPIAUCSL"]
econ_data_detailed <- left_join(econ_data_detailed, cpi_components, by = c("symbol" = "series_id"))
econ_data_detailed$title <- str_replace(econ_data_detailed$title,
"Consumer Price Index for All Urban Consumers: ",
"")
econ_data_detailed$title <- str_replace(econ_data_detailed$title,
" in U.S. City Average",
"")
new_order <- c(unique(econ_data_detailed$title))
new_order2 <- c(new_order[30], new_order[-30])
econ_data_detailed <- arrange(transform(econ_data_detailed,
title = factor(title, levels = new_order2)), title)
ggplot(econ_data_detailed, aes(x=date, y = year_change)) +
geom_point(aes(color = factor(year_change <= 0), alpha = 0.1)) +
#geom_point(aes(color="negative", alpha = 0.1)) +
geom_smooth(method = "loess", color = "lightblue") +
facet_wrap(~title, scales="free") +
labs(x = "Year",
y = "YoY % Change",
title = "Yearly change of US CPI (All Items) and its components",
subtitle = "YoY change being positive or negative") +
scale_color_manual(values = c("red", "lightblue")) +
theme(legend.position = "none") +
scale_y_continuous(labels = scales::percent) +
theme_bw() +
theme(legend.position = "none")
essential <- c("Housing", "Transportation", "Food and Beverages", "Apparel")
econ_data_detailed_important <- econ_data_detailed %>%
filter(title %in% essential)
econ_data_detailed_important <- arrange(transform(econ_data_detailed_important,
title = factor(title, levels = essential)), title)
ggplot(econ_data_detailed_important, aes(x=date, y = year_change)) +
geom_point(aes(color = factor(year_change <= 0), alpha = 0.1)) +
#geom_point(aes(color="negative", alpha = 0.1)) +
geom_smooth(method = "loess", color = "lightblue") +
facet_wrap(~title, scales="free") +
labs(x = "Year",
y = "YoY % Change",
title = "Yearly change of US CPI (All Items) and its components",
subtitle = "YoY change being positive or negative") +
scale_color_manual(values = c("red", "lightblue")) +
theme(legend.position = "none") +
scale_y_continuous(labels = scales::percent) +
theme_bw() +
theme(legend.position = "none")
This graph is fine, but perhaps has too many sub-categories. You can find the relative importance of components in the Consumer Price Indexes: U.S. city average, December 2020 here. Can you choose a smaller subset of the components you have and only list the major categories (Housing, Transportation, Food and beverages, Medical care, Education and communication, Recreation, and Apparel), sorted according to their relative importance?
We visited the website for the source data and noticed that our four chosen categories are assigned the greatest CPI weights, and therefore included them.
As usual, there is a lot of explanatory text, comments, etc. You do not need these, so delete them and produce a stand-alone document that you could share with someone. Knit the edited and completed R Markdown file as an HTML document (use the “Knit” button at the top of the script editor window) and upload it to Canvas.
Please seek out help when you need it, and remember the 15-minute rule. You know enough R (and have enough examples of code from class and your readings) to be able to do this. If you get stuck, ask for help from others, post a question on Slack– and remember that I am here to help too!
As a true test to yourself, do you understand the code you submitted and are you able to explain it to someone else?
Check minus (1/5): Displays minimal effort. Doesn’t complete all components. Code is poorly written and not documented. Uses the same type of plot for each graph, or doesn’t use plots appropriate for the variables being analyzed.
Check (3/5): Solid effort. Hits all the elements. No clear mistakes. Easy to follow (both the code and the output).
Check plus (5/5): Finished all components of the assignment correctly and addressed both challenges. Code is well-documented (both self-documented and with additional comments as necessary). Used tidyverse, instead of base R. Graphs and tables are properly labelled. Analysis is clear and easy to follow, either because graphs are labeled clearly or you’ve written additional text to describe how you interpret the output.